Limitations of Language Models in Arithmetic and Symbolic Induction

Qian, Jing, Wang, Hong, Li, Zekun, Li, Shiyang, Yan, Xifeng

arXiv.org Artificial Intelligence

Recent work has shown that large pretrained Language Models (LMs) not only perform remarkably well on a range of Natural Language Processing (NLP) tasks but also begin to improve on reasoning tasks such as arithmetic induction, symbolic manipulation, and commonsense reasoning as model size increases. However, it is still unclear what the underlying capabilities of these LMs are. Surprisingly, we find that these models have limitations on certain basic symbolic manipulation tasks such as copy, reverse, and addition. When the total number of symbols or repeating symbols increases, model performance drops quickly. We investigate the potential causes behind this phenomenon and examine a set of possible methods, including explicit positional markers, fine-grained computation steps, and LMs with callable programs. Experimental results show that none of these techniques can solve the simplest addition induction problem completely. In the end, we introduce LMs with tutor, which demonstrates every single step of teaching. LMs with tutor delivers 100% accuracy on out-of-distribution (OOD) and repeating-symbol cases, shedding new light on the boundary of large LMs in induction.
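The probes the abstract names (copy, reverse, multi-digit addition, and runs of repeating symbols) can be generated mechanically at any length, which is how length-generalization failures are typically measured. A minimal sketch of such a probe generator (illustrative only; the function names and task details here are assumptions, not the paper's actual evaluation code):

```python
# Illustrative sketch: generating copy / reverse / addition probes at a
# chosen length, so a model's accuracy can be tracked as symbol count grows.
import random

def make_probe(task, length, alphabet="abcde", seed=None):
    """Return an (input, expected_output) pair for one symbolic task."""
    rng = random.Random(seed)
    if task == "copy":
        s = "".join(rng.choice(alphabet) for _ in range(length))
        return s, s
    if task == "reverse":
        s = "".join(rng.choice(alphabet) for _ in range(length))
        return s, s[::-1]
    if task == "addition":
        # Two random operands with exactly `length` digits each.
        a = rng.randrange(10 ** (length - 1), 10 ** length)
        b = rng.randrange(10 ** (length - 1), 10 ** length)
        return f"{a}+{b}", str(a + b)
    raise ValueError(f"unknown task: {task}")

def repeating_probe(symbol, length):
    # Runs of one repeated symbol ("aaaa...") are the hard case the
    # abstract highlights; here paired with the reverse task.
    s = symbol * length
    return s, s[::-1]
```

Sweeping `length` well past the lengths seen in training yields the OOD regime where the abstract reports accuracy collapsing.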


What AI Can Tell Us About Intelligence

#artificialintelligence

In short, on one view much of our understanding of the world is given by nature, with learning a matter of fleshing out the details. There is an alternate, empiricist view which inverts this: symbolic manipulation is a rarity in nature, primarily arising as a capacity for communication acquired gradually by our hominin ancestors over the last two million years. On this view, the primary cognitive capacities are non-symbolic learning abilities bound up with improving survival, such as rapidly recognizing prey, predicting their likely actions, and developing skillful responses. It assumes that the vast majority of complex cognitive abilities are acquired through a general, self-supervised learning capacity, one that builds from experience an intuitive world-model capable of the central features of common sense. It also assumes that most of our complex cognitive capacities do not turn on symbolic manipulation; they make do, instead, with simulating various scenarios and predicting the best outcomes.